Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence
He, Fengxiang, Liu, Tongliang, Tao, Dacheng
Deep neural networks have achieved dramatic success through the optimization method of stochastic gradient descent (SGD). However, it is still not clear how to tune the hyper-parameters, especially the batch size and the learning rate, to ensure good generalization. This paper reports both theoretical and empirical evidence for a training strategy: to achieve good generalization, the ratio of batch size to learning rate should not be too large. Specifically, we prove a PAC-Bayes generalization bound for neural networks trained by SGD which is positively correlated with the ratio of batch size to learning rate. This correlation builds the theoretical foundation of the training strategy. Furthermore, we conduct a large-scale experiment to verify the correlation and the training strategy. We trained 1,600 models based on the ResNet-110 and VGG-19 architectures on the CIFAR-10 and CIFAR-100 datasets while strictly controlling unrelated variables, and collected accuracies on the test sets for evaluation. Spearman's rank-order correlation coefficients and the corresponding $p$-values on 164 groups of the collected data demonstrate that the correlation is statistically significant, which fully supports the training strategy.
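To make the statistical test concrete, below is a minimal sketch (not the authors' code) of how a Spearman rank-order test between the batch-size/learning-rate ratio and test accuracy can be run with SciPy; the ratio and accuracy arrays are hypothetical placeholders, not data from the paper.

# Minimal sketch: rank-correlating the batch-size/learning-rate ratio
# with test accuracy. All numbers below are hypothetical placeholders.
import numpy as np
from scipy.stats import spearmanr

ratios = np.array([160., 320., 640., 1280., 2560., 5120.])       # batch size / learning rate
test_acc = np.array([0.935, 0.931, 0.922, 0.910, 0.893, 0.871])  # hypothetical accuracies

rho, p = spearmanr(ratios, test_acc)
print(f"Spearman rho = {rho:.3f}, p-value = {p:.3g}")
# A significantly negative rho (small p) indicates that test accuracy
# decreases as the batch-size/learning-rate ratio grows, which is the
# pattern the paper's training strategy predicts.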
Reviews: Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence
Theory-wise, the authors neglected to discuss several prior works, some of which suggested theories opposite to theirs. For example, "Don't Decay the Learning Rate, Increase the Batch Size", ICLR'18, seems to support a constant batch-size/learning-rate ratio empirically. --- after rebuttal --- After reading the comments and the authors' rebuttal, I am satisfied with the responses. The paper theoretically verifies that the ratio of batch size to learning rate is positively related to the generalization error. Specifically, it verifies some very recent empirical findings, e.g., "Don't Decay the Learning Rate, Increase the Batch Size", ICLR 2018, which empirically states that increasing the batch size and decaying the learning rate are quantitatively equivalent. I think the theoretical result is novel and timely and would interest many readers in the deep learning community.
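The equivalence the review refers to is easy to state in code. The sketch below (not from either paper; the schedule parameters are illustrative) shows that decaying the learning rate and growing the batch size by the same factor keep the batch-size/learning-rate ratio identical at every step, which is exactly the quantity the present paper's bound depends on.

# Sketch of the two schedules the review contrasts. Under the view that
# only the ratio batch/lr matters, the schedules are interchangeable.
def decayed_lr_schedule(step, batch=128, eta0=0.1, decay=0.5, every=30):
    """Classic schedule: fixed batch size, learning rate halved periodically."""
    return batch, eta0 * decay ** (step // every)

def grown_batch_schedule(step, batch0=128, eta=0.1, growth=2, every=30):
    """Alternative schedule: fixed learning rate, batch size doubled periodically."""
    return batch0 * growth ** (step // every), eta

for step in (0, 30, 60):
    b1, e1 = decayed_lr_schedule(step)
    b2, e2 = grown_batch_schedule(step)
    print(step, b1 / e1, b2 / e2)  # the two ratios match at every step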
The paper proves a new upper bound on the generalization error of networks trained by SGD; the bound implies a negative correlation between generalization ability and the ratio of batch size to learning rate. The authors conducted experiments on a large number of models to verify the theoretical findings. The reviewers have mixed opinions on the paper. On one hand, the paper studies a problem important to the deep learning community, and the theoretical result has its uniqueness (e.g., regarding the ratio of batch size to learning rate), although some discussion of its relation to previous PAC-Bayes bounds is missing and some assumptions in the theory need more justification. On the other hand, the suggestions resulting from the experiments (e.g., always increase the learning rate) do not seem very reasonable and need more empirical verification.
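For context on the meta-review's remark about prior PAC-Bayes bounds: a common form of the standard McAllester-style template that such results build on is given below. This is not the paper's theorem; the paper's contribution is a bound of this flavor whose magnitude additionally grows with the ratio of batch size to learning rate.

\[
\mathbb{E}_{\theta \sim Q}\, R(\theta) \;\le\; \mathbb{E}_{\theta \sim Q}\, \hat{R}(\theta)
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(n/\delta)}{2(n-1)}}
\]

holding with probability at least $1-\delta$ over an i.i.d. training sample of size $n$, where $P$ is a prior and $Q$ a posterior distribution over network weights, $R$ is the expected risk, and $\hat{R}$ the empirical risk.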